Who’s Responsible for an Offensive AI Ad? The Brand or the Bot?
Advertising can make or break a brand, but what happens when the ad is written by a bot? As AI tools like ChatGPT, Midjourney, and Copy.ai become standard in digital marketing, companies are increasingly turning to them to generate ad copy, images, and slogans at scale. But with speed and automation comes risk.
What happens when AI creates an offensive or tone-deaf ad?
Who takes the blame: the algorithm that wrote it, or the brand that published it?
We’ve already seen early stumbles. A beauty brand’s AI-generated slogan compared wrinkles to “flaws in the algorithm.” An AI-created beer ad featured a nonsensical scene of people with too many fingers chugging foam from melting cans. Some are funny. Others cross the line. In this post, I explore what happens when AI ads go wrong, test how easy it is to prompt an AI into creating questionable content, and ask the big question: who’s really responsible?
Testing the Limits: How Easily Can AI Get It Wrong?
To see just how risky this can get, I ran an experiment. I asked an AI to write three different ads: one for a weight loss product, one for a credit service, and one for a children’s toy. I didn’t guide the AI with any ethical constraints, just a basic product prompt.
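For readers who want to run a similar test, the setup can be sketched in a few lines. Note that `generate` here is a stand-in for whatever text-generation client you use (an API wrapper or local model), not a real library call; the stub version below just echoes the prompt so the sketch runs without credentials.

```python
# Minimal sketch of the experiment: three bare product prompts,
# deliberately given no ethical constraints or tone guidance.
PROMPTS = {
    "weight_loss": "Write an ad for a weight loss supplement targeting young women.",
    "credit_repair": "Write an ad for a credit repair service.",
    "kids_toy": "Write a funny ad for a robot toy for kids.",
}

def run_experiment(generate):
    """Send each bare prompt through the model and collect the ad copy.

    `generate` is injected so any text-generation backend can be plugged in.
    """
    return {name: generate(prompt) for name, prompt in PROMPTS.items()}

if __name__ == "__main__":
    # Stub generator so the sketch is runnable without an API key.
    results = run_experiment(lambda p: f"[model output for: {p}]")
    for name, ad_copy in results.items():
        print(f"--- {name} ---\n{ad_copy}\n")
```

The point of the deliberately bare prompts is to mirror how many teams actually use these tools: product name in, copy out, no guardrails in between.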
Weight Loss Supplement
Prompt: “Write an ad for a weight loss supplement targeting young women.” AI Ad Copy: “Get the body you deserve. Slim down fast with SlimX, the formula that erases cravings and melts fat. Because confidence is skin-deep.”
Why This Misses the Mark: This language plays into harmful beauty standards. Phrases like “get the body you deserve” and “confidence is skin-deep” suggest that appearance determines worth, exactly the type of messaging many brands now avoid.
Credit Repair App
Prompt: “Write an ad for a credit repair service.” AI Ad Copy: “Bad credit? Bad life. Fix it fast with Credify, the instant way to take back control.”
Why This Is Problematic: While catchy, this ad implies that people with poor credit are doomed, exploiting vulnerability instead of offering empathy. A human marketer would likely flag this tone as problematic.
Children’s Toy
Prompt: “Write a funny ad for a robot toy for kids.” AI Ad Copy: “Meet RoboBuddy: smarter than your teacher, funnier than your parents, and only slightly more likely to take over the world!”
Why This Is Risky: While it may be intended as a joke, references to robots “taking over the world” could raise red flags, especially in today’s climate of AI anxiety. Not quite offensive, but definitely tone-deaf depending on the audience.
The Real Question: Who’s Accountable?
When AI-generated ads like these go public and backlash follows, where does responsibility land?
Option 1: Blame the Bot
Some argue that AI is just following prompts. If an ad turns out offensive, maybe the system needs better filters or more refined training. In other words, it’s a technical bug, not a moral failure. But here’s the issue: AI doesn’t understand ethics. It doesn’t know what’s sexist, racist, or manipulative unless we explicitly teach it. And that puts the burden back on the people using it.
Option 2: Blame the Brand
Legally and ethically, the brand is still in charge. Even if a machine generated the words, it’s the human marketers and execs who approve the final version. Choosing to automate doesn’t mean choosing to abdicate responsibility. Think of AI as a junior intern: fast, sometimes creative, but in need of close supervision.
What Brands Can Do to Stay Safe
Human-in-the-loop editing: Don’t publish AI-generated ads without review. Every piece of content should pass through a human editor who can spot red flags, check tone, and assess emotional impact.
Train AI on ethical data: The more inclusive and representative your training material, the less likely the AI is to produce harmful content. Garbage in, garbage out applies to ethics, too.
Establish clear accountability: Make it clear in your workflows who reviews and signs off on AI content. That helps prevent the “not my fault” blame game when things go wrong.
Build brand values into the prompt: Rather than just saying “Write a funny ad,” tell the AI how to be funny: “Write a lighthearted ad that avoids stereotypes and appeals to families.”
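Two of these safeguards are easy to sketch in code: baking brand values into the prompt, and a crude red-flag filter that routes risky copy to a human reviewer. The guideline text and keyword list below are illustrative assumptions, not a vetted policy, and a keyword match is a blunt instrument; every ad should still get a human pass before publication.

```python
# Illustrative brand guidelines prepended to every generation request.
BRAND_GUIDELINES = (
    "Avoid stereotypes, body-shaming language, and fear-based claims. "
    "Keep the tone lighthearted and family-friendly."
)

# Hypothetical red-flag phrases drawn from the failure modes in this post.
RED_FLAGS = ["deserve", "bad life", "take over the world", "melts fat"]

def build_prompt(task: str) -> str:
    """Wrap a bare task with brand values: 'how to be funny', not just 'be funny'."""
    return f"{BRAND_GUIDELINES}\n\nTask: {task}"

def needs_human_review(ad_copy: str) -> bool:
    """Return True if the copy contains known risky phrasing.

    Flagged copy is routed to an editor; in practice, unflagged copy
    still needs review, since a keyword list can't judge tone or context.
    """
    lowered = ad_copy.lower()
    return any(flag in lowered for flag in RED_FLAGS)

if __name__ == "__main__":
    print(build_prompt("Write a funny ad for a robot toy for kids."))
    print(needs_human_review("Get the body you deserve."))  # True
```

The design point: the filter never approves content on its own, it only escalates. Approval stays with a named human, which is exactly the accountability trail described above.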
A Powerful Tool, Not a Moral Agent
AI is not a scapegoat. It’s a tool, one that reflects the biases, values, and intentions of the people who use it. When it messes up, we should ask not just “What went wrong?” but “Who allowed it to go public?” Brands can’t afford to be careless. In the age of automation, the pressure to move fast can’t come at the expense of cultural sensitivity, brand trust, or ethical standards.